Rigorous guarantees about the performance of predictive algorithms are necessary in order to ensure their responsible use. Previous work has largely focused on bounding the expected loss of a predictor, but this is not sufficient in many risk-sensitive applications where the distribution of errors is important. In this work, we propose a flexible framework to produce a family of bounds on quantiles of the loss distribution incurred by a predictor. Our method takes advantage of the order statistics of the observed loss values rather than relying on the sample mean alone. We show that a quantile is an informative way of quantifying predictive performance, and that our framework applies to a variety of quantile-based metrics, each targeting important subsets of the data distribution. We analyze the theoretical properties of our proposed method and demonstrate its ability to rigorously control loss quantiles on several real-world datasets.
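The core idea above — certifying a loss quantile from the order statistics of observed losses rather than the sample mean — can be illustrated with the classical distribution-free order-statistic bound (a minimal sketch, not the paper's specific framework; the function names are ours):

```python
import math

def binom_cdf(k, n, p):
    # P(Binomial(n, p) <= k), computed exactly from the pmf
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def quantile_upper_bound(losses, beta, delta):
    """Distribution-free (1 - delta)-confidence upper bound on the beta-quantile
    of the loss distribution, taken as an order statistic of the i.i.d. sample:
    the k-th smallest loss, with k chosen from the Binomial(n, beta) tail."""
    n = len(losses)
    sorted_losses = sorted(losses)
    for k in range(1, n + 1):
        # L_(k) upper-bounds q_beta w.p. >= P(Bin(n, beta) <= k - 1)
        if binom_cdf(k - 1, n, beta) >= 1 - delta:
            return sorted_losses[k - 1]
    return float("inf")  # sample too small to certify at this confidence
```

For example, with 100 observed losses, a 95%-confidence bound on the median is roughly the 59th order statistic — strictly above the empirical median, with the gap shrinking as the sample grows.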
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
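The classification rule the abstract describes — one prototype per class as the mean of that class's support embeddings, queries assigned to the nearest prototype — is only a few lines of NumPy (a sketch using squared Euclidean distance; the embedding network that produces the vectors is omitted, and the function names are ours):

```python
import numpy as np

def compute_prototypes(support_emb, support_labels):
    """One prototype per class: the mean of that class's support embeddings."""
    classes = np.unique(support_labels)
    protos = np.stack([support_emb[support_labels == c].mean(axis=0)
                       for c in classes])
    return classes, protos

def nearest_prototype(query_emb, classes, protos):
    """Assign each query to the class of its nearest prototype
    (squared Euclidean distance, computed by broadcasting)."""
    d2 = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]
```

The simplicity is the point: given a good embedding, the few-shot classifier itself has no trainable parameters beyond the episode's support set.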
Federated learning (FL) enables distributed model training from local data collected by users. In distributed systems with constrained resources and potentially high dynamics, e.g., mobile edge networks, the efficiency of FL is an important problem. Existing works have separately considered different configurations to make FL more efficient, such as infrequent transmission of model updates, client subsampling, and compression of update vectors. However, an important open problem is how to jointly apply and tune these control knobs in a single FL algorithm, to achieve the best performance by allowing a high degree of freedom in control decisions. In this paper, we address this problem and propose FlexFL - an FL algorithm with multiple options that can be adjusted flexibly. Our FlexFL algorithm allows both arbitrary rates of local computation at clients and arbitrary amounts of communication between clients and the server, making both the computation and communication resource consumption adjustable. We prove a convergence upper bound of this algorithm. Based on this result, we further propose a stochastic optimization formulation and algorithm to determine the control decisions that (approximately) minimize the convergence bound, while conforming to constraints related to resource consumption. The advantage of our approach is also verified using experiments.
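The two control knobs the abstract names — the amount of local computation at clients and the amount of communication with the server — can be made concrete with a toy local-SGD round (a sketch under our own simplifications, not the paper's FlexFL algorithm; `tau` governs computation, top-`k` sparsification governs communication):

```python
import numpy as np

def topk_sparsify(v, k):
    """Keep only the k largest-magnitude coordinates of an update vector."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def federated_round(w, client_grads, tau=5, lr=0.1, k=None):
    """One FL round: each client runs tau local gradient steps from the global
    model w, sends an (optionally compressed) update; the server averages."""
    updates = []
    for grad_fn in client_grads:
        w_i = w.copy()
        for _ in range(tau):            # adjustable local computation
            w_i -= lr * grad_fn(w_i)
        delta = w_i - w
        if k is not None:               # adjustable communication volume
            delta = topk_sparsify(delta, k)
        updates.append(delta)
    return w + np.mean(updates, axis=0)
```

Tuning `tau`, `lr`, and `k` jointly under resource constraints is exactly the kind of control decision the paper's optimization formulation targets.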
As causal inference becomes more widespread, the importance of having good tools to test for causal effects increases. In this work we focus on the problem of testing for causal effects that manifest in a difference in distribution for treatment and control. We build on work applying kernel methods to causality, considering the previously introduced Counterfactual Mean Embedding framework (\textsc{CfME}). We improve on this by proposing the \emph{Doubly Robust Counterfactual Mean Embedding} (\textsc{DR-CfME}), which has better theoretical properties than its predecessor by leveraging semiparametric theory. This leads us to propose new kernel-based test statistics for distributional effects which are based upon doubly robust estimators of treatment effects. We propose two test statistics, one which is a direct improvement on previous work and one which can be applied even when the support of the treatment arm is a subset of that of the control arm. We demonstrate the validity of our methods on simulated and real-world data, as well as giving an application in off-policy evaluation.
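The doubly robust construction the abstract builds on is easiest to see in its scalar form — the augmented IPW (AIPW) estimator of the average treatment effect, which is consistent if either the propensity model or the outcome model is correct (a standard textbook sketch, not the paper's kernel mean-embedding version; names are ours):

```python
import numpy as np

def aipw_ate(y, t, e_hat, mu1_hat, mu0_hat):
    """Augmented IPW (doubly robust) estimate of the average treatment effect.
    y: outcomes; t: binary treatment; e_hat: estimated propensity P(T=1|X);
    mu1_hat / mu0_hat: outcome-model predictions under treatment / control."""
    term1 = mu1_hat + t * (y - mu1_hat) / e_hat
    term0 = mu0_hat + (1 - t) * (y - mu0_hat) / (1 - e_hat)
    return np.mean(term1 - term0)
```

The paper's DR-CfME replaces the scalar outcome with its embedding in a reproducing kernel Hilbert space, yielding tests on the whole outcome distribution rather than its mean.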
Deep reinforcement learning algorithms have succeeded in several challenging domains. Classic online RL job schedulers can learn efficient scheduling strategies but often take thousands of timesteps to explore the environment and adapt from a randomly initialized DNN policy. Existing RL schedulers overlook the importance of learning from historical data and improving upon custom heuristic policies. Offline reinforcement learning presents the prospect of policy optimization from pre-recorded datasets without online environment interaction. Following the recent success of data-driven learning, we explore two RL methods: 1) Behaviour Cloning and 2) Offline RL, which aim to learn policies from logged data without interacting with the environment. These methods address the challenges concerning the cost of data collection and safety, particularly pertinent to real-world applications of RL. Although the data-driven RL methods generate good results, we show that the performance is highly dependent on the quality of the historical datasets. Finally, we demonstrate that by effectively incorporating prior expert demonstrations to pre-train the agent, we short-circuit the random exploration phase to learn a reasonable policy with online training. We utilize Offline RL as a \textbf{launchpad} to learn effective scheduling policies from prior experience collected using Oracle or heuristic policies. Such a framework is effective for pre-training from historical datasets and well suited to continuous improvement with online data collection.
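Behaviour cloning, the first of the two data-driven methods above, is simply supervised learning on logged (state, action) pairs, with no environment interaction. A minimal linear-softmax sketch (our own toy illustration, not the paper's scheduler):

```python
import numpy as np

def behaviour_clone(states, actions, n_actions, epochs=300, lr=0.5):
    """Fit a linear softmax policy to logged (state, action) pairs by
    gradient descent on the cross-entropy loss."""
    W = np.zeros((states.shape[1], n_actions))
    onehot = np.eye(n_actions)[actions]
    for _ in range(epochs):
        logits = states @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)          # softmax probabilities
        W -= lr * states.T @ (p - onehot) / len(states)
    return W

def act(W, state):
    """Greedy action of the cloned policy."""
    return int(np.argmax(state @ W))
```

The quality ceiling is the logged policy itself — which is why the abstract uses such a clone only as a launchpad for further online improvement.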
The exponential growth in demand for digital services drives massive datacenter energy consumption and negative environmental impacts. Promoting sustainable solutions to pressing energy and digital infrastructure challenges is crucial. Several hyperscale cloud providers have announced plans to power their datacenters using renewable energy. However, integrating renewables to power the datacenters is challenging because the power generation is intermittent, necessitating approaches to tackle power supply variability. Hand-engineering domain-specific, heuristics-based schedulers to meet specific objective functions in such complex, dynamic green datacenter environments is time-consuming, expensive, and requires extensive tuning by domain experts. Green datacenters need smart systems and system software that employ multiple renewable energy sources (wind and solar) by intelligently adapting computing to renewable energy generation. We present RARE (Renewable energy Aware REsource management), a Deep Reinforcement Learning (DRL) job scheduler that automatically learns effective job scheduling policies while continually adapting to datacenters' complex dynamic environment. The resulting DRL scheduler performs better than heuristic scheduling policies with different workloads and adapts to the intermittent power supply from renewables. We demonstrate DRL scheduler system design parameters that, when tuned correctly, produce better performance. Finally, we demonstrate that the DRL scheduler can learn from and improve upon existing heuristic policies using Offline Learning.
Mechanical systems naturally evolve on principal bundles that describe their inherent symmetries. The resulting decomposition of the configuration manifold into a symmetry group and an internal shape space has provided deep insight into the locomotion of many robotic and biological systems. Separately, the property of differential flatness has enabled efficient, effective planning and control algorithms for a variety of robotic systems. However, a practical means of finding a flat output for an arbitrary robotic system remains an open problem. In this work, we exhibit a surprising new connection between these two domains, using symmetry directly to construct a flat output for the first time. We provide a sufficient condition for a trivialization of the bundle in which the group variables themselves are a flat output. We call this a geometric flat output, since it is equivariant (i.e., symmetry-preserving) and often global or almost-global, properties not typically enjoyed by other flat outputs. In such a trivialization, the motion planning problem is easily solved: a given trajectory for the group variables fully determines the trajectory of the shape variables that exactly realizes this motion. We provide a partial catalog of robotic systems with geometric flat outputs and give examples for the planar rocket, the planar aerial manipulator, and the quadrotor.
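For reference, the standard definition of differential flatness underlying the abstract (textbook material, not specific to this paper): a system $\dot{x} = f(x, u)$ is differentially flat if there exists an output

```latex
y = h\bigl(x, u, \dot{u}, \dots, u^{(r)}\bigr)
\quad\text{such that}\quad
x = \phi\bigl(y, \dot{y}, \dots, y^{(s)}\bigr),
\qquad
u = \psi\bigl(y, \dot{y}, \dots, y^{(s)}\bigr),
```

i.e., the full state and input are recovered algebraically from the flat output $y$ and finitely many of its derivatives. Planning then reduces to choosing a sufficiently smooth curve for $y$ alone; the abstract's contribution is that, under its sufficient condition, the symmetry-group variables can serve as such a $y$.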
Despite decades of research, existing navigation systems still face real-world challenges when deployed in the wild, such as in cluttered home environments or human-occupied public spaces. To address this, we present a new class of implicit control policies combining the benefits of imitation learning with the robust handling of system constraints from model predictive control (MPC). Our approach, called Performer-MPC, uses a learned cost function parameterized by visual context embeddings provided by Performers — a low-rank implicit-attention Transformer. We jointly train the cost function and construct the controller relying on it, effectively solving the corresponding bilevel optimization problem end-to-end. We show that the resulting policy improves standard MPC performance by leveraging a few expert demonstrations in different challenging real-world scenarios. Compared with a standard MPC policy, Performer-MPC achieves >40% better goal reached in cluttered environments and >65% better on social metrics when navigating around humans.
Due to the ever-growing demand for electronic chips across different sectors, semiconductor companies have been mandated to offshore their manufacturing processes. This unwanted matter has made them concerned about the security of their chips and has given rise to hardware attacks. In this condition, different entities in the semiconductor supply chain can act maliciously and execute attacks on the design computing layers, from devices to systems. The attack we consider is a hardware Trojan inserted during mask generation/fabrication in an untrusted foundry. The Trojan leaves a footprint in the fabricated layout through the addition, deletion, or change of design cells. To tackle this problem, we propose in this work the Explainable Vision System for Hardware Testing and Assurance (EVHA), which can detect the smallest possible changes to a design in a low-cost, accurate, and fast manner. The inputs to the system are scanning electron microscopy (SEM) images acquired from the integrated circuits (ICs) under examination. The system output is the determination of the IC status as defect-free or as containing any defect and/or hardware Trojan introduced through the addition, deletion, or change of design cells at the cell level. This article provides an overview of the design, development, implementation, and analysis of our defense system.
A common sales strategy involves having account executives (AEs) actively reach out and contact potential customers. However, not all contact attempts have a positive effect: some attempts do not change the customer's decision, while others may even interfere with the desired outcome. In this work, we propose using causal inference to estimate the effect of contacting each potential customer and to formulate a contact policy accordingly. We demonstrate this approach on worthy.com, an online jewelry marketplace. We studied Worthy's business process to identify the relevant decisions and outcomes, and formalized assumptions about how they are made. Using causal tools, we selected a decision point where improving AE contact activity appeared promising. We then formulated a personalized policy recommending contact only with those customers for whom it would be beneficial. Finally, we validated the results in an A/B test over a 3-month period, which led to a 22% increase in the item delivery rate of the targeted population (p-value = 0.026). The policy is now in continuous use.
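The "contact only when beneficial" idea can be illustrated with a per-segment uplift estimate from logged data (a toy T-learner sketch under our own simplifying assumptions — randomized contact within segments, discrete segments; the abstract does not describe Worthy's actual model at this level of detail):

```python
import numpy as np

def segment_contact_policy(segments, contacted, converted):
    """Per-segment uplift: conversion rate among contacted customers minus the
    rate among uncontacted ones; recommend contact only where uplift > 0."""
    policy = {}
    for s in np.unique(segments):
        m = segments == s
        rate_contacted = converted[m & (contacted == 1)].mean()
        rate_not = converted[m & (contacted == 0)].mean()
        policy[s] = bool(rate_contacted - rate_not > 0)
    return policy
```

A policy of this shape is then what an A/B test such as the one above would validate: treatment arm follows the recommendation, control arm follows the status-quo contact practice.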